Oracle Performance Tuning and Optimization
You can use PCM locks to lock data blocks for reading or for updating. If a PCM lock is used as a read lock, other instances can acquire read locks on the same data blocks; an exclusive lock must be acquired only when updating. PCM locks are allocated to data files, which gives you some flexibility in how the locks are configured. A single PCM lock covers one or more data blocks, depending on the number of PCM locks allocated to the data file and the size of the data file. Because there is inherent overhead associated with PCM locks, it is not beneficial to overconfigure them. If you know your data-access patterns, you can allocate PCM locks to match those patterns, as in the sketch below.
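As a concrete illustration, the following init.ora fragment is a minimal sketch of mapping PCM locks to data files with the GC_FILES_TO_LOCKS parameter used by the Parallel Server option. The file numbers and lock counts here are hypothetical starting points, not recommendations; you must size them against your own data files and access patterns.

```
# Hypothetical PCM lock allocation (Parallel Server option).
# Data file 1 is heavily updated, so it gets 500 hashed PCM locks;
# data files 2 and 3 are mostly read, so they share a pool of 200 locks.
GC_FILES_TO_LOCKS = "1=500:2-3=200"
```

The design trade-off is the one described above: enough locks that instances rarely contend for blocks covered by the same lock, but not so many that the overhead of managing the locks becomes a cost in itself.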
The dynamic performance tables V$BH, V$CACHE, and V$PING contain information about the frequency of PCM lock contention. By looking at the FREQUENCY column in these tables, you can get an idea of the number of times lock conversions took place because of contention between instances. The dynamic performance table V$LOCK_ACTIVITY gives information on all types of PCM lock conversions. From this information, you can see whether a particular instance is experiencing a dramatic change in lock activity. An increase in lock activity may indicate that you don't have a sufficient number of PCM locks on that instance. With this information, you can use the V$BH, V$CACHE, and V$PING tables to identify the problem area. (Sample queries against these views appear at the end of this section.)

The Parallel Server option can be effective if your application is partitionable. If all the users in your system must access the same data, a parallel server may not be for you. But if you can partition your workload into divisions based on table access, or if you need a fault-tolerant configuration, the Parallel Server option may be a good fit. If you use the Parallel Server option, you must take special care to configure the system properly. By designing the system properly, you can take maximum advantage of the parallel server features.

Spin Counts

Multiprocessor environments may benefit from tuning the parameter SPIN_COUNT. Under normal circumstances, if a latch is not available, the process sleeps for a while and then wakes up to try the latch again. Because a latch is a low-level lock, a process does not hold it very long. On a multiprocessor system, it is likely that the process holding the latch is currently running on another CPU and will release the latch in a short time. By setting SPIN_COUNT to a value greater than zero, the process spins while counting down from SPIN_COUNT to zero; if the latch is still not available, the process then goes to sleep.

The time it takes to perform one spin depends on the speed of your computer: one spin on a faster machine completes more quickly than one spin on a slower machine, but the process holding the latch is also running at that faster rate. The spin itself executes only a few instructions and is therefore very fast.

Enabling spin counts has benefits and drawbacks. Setting SPIN_COUNT may increase CPU usage and thus slow the entire system. On the other hand, because putting a process to sleep is a relatively expensive operation, spinning sometimes acquires the latch in fewer CPU cycles than the sleep-and-wake-up cycle would take, improving performance. If the process spins and then must go to sleep anyway, there is a definite performance loss. By carefully monitoring the system, you may be able to find an optimal value for SPIN_COUNT: watch the MISSES and SLEEPS statistics in the dynamic performance table V$LATCH. If the SLEEPS value goes down, you are on the right track; if you never see any sleeps, there is no need to tune SPIN_COUNT at all.
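Following up on the PCM monitoring views described before the spin-count discussion, the queries below are a minimal sketch of how you might look for contended blocks and summarize lock conversions. The exact column lists of these views vary by Oracle release; the FREQUENCY column is the one described above, and the threshold of 10 is an arbitrary example value.

```sql
-- Sketch: find the data blocks most frequently pinged between instances.
-- FREQUENCY is the contention counter described in the text; the
-- cutoff (10) is arbitrary and should be adapted to your workload.
SELECT file#, block#, frequency
  FROM v$ping
 WHERE frequency > 10
 ORDER BY frequency DESC;

-- Summarize all types of PCM lock conversions on this instance.
SELECT from_val, to_val, action_val, counter
  FROM v$lock_activity;
```

A sudden jump in the V$LOCK_ACTIVITY counters between samples is the "dramatic change in lock activity" mentioned above, and the V$PING query then points at the files and blocks responsible.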
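For the spin-count tuning itself, this sketch shows a hypothetical SPIN_COUNT setting together with a V$LATCH query for watching MISSES and SLEEPS while you experiment. The value 2000 is an arbitrary starting point, not a recommendation, and changing it requires an instance restart.

```sql
-- In init.ora (hypothetical starting value, takes effect at restart):
--   SPIN_COUNT = 2000

-- Sample latch statistics before and after the change.
-- If SLEEPS falls between samples, the new SPIN_COUNT is helping;
-- if SLEEPS is already zero everywhere, leave SPIN_COUNT alone.
SELECT name, gets, misses, sleeps
  FROM v$latch
 WHERE misses > 0
 ORDER BY sleeps DESC;
```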